
    Further Model-Based Estimates of U.S. Total Manufacturing Production Capital and Technology, 1949-2005

    Production capital and technology (i.e., total factor productivity) in U.S. manufacturing are fundamental for understanding output and productivity growth of the U.S. economy but are unobserved at this level of aggregation and must be estimated before being used in empirical analysis. Previously, we developed a method for estimating production capital and technology based on an estimated dynamic structural economic model and applied the method using annual SIC data for 1947-1997 to estimate production capital and technology in U.S. total manufacturing. In this paper, we update this work by reestimating the model and production capital and technology using annual SIC data for 1949-2001 and partly overlapping NAICS data for 1987-2005.
    Keywords: Kalman filter estimation of latent variables
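    The filtering step mentioned in the abstract can be illustrated with a generic linear-Gaussian state-space model. The Python sketch below is a minimal stand-in, not the paper's estimated structural model: the two-dimensional state loosely plays the role of latent (log) capital and technology, and every matrix value is a made-up placeholder.

```python
import numpy as np

def kalman_filter(y, T, Z, Q, H, a0, P0):
    """Standard Kalman filter for a linear-Gaussian state-space model:
       state:        a_t = T a_{t-1} + eta_t,  eta_t ~ N(0, Q)
       observation:  y_t = Z a_t + eps_t,      eps_t ~ N(0, H)
    Returns the filtered state mean for each period."""
    a, P = a0.copy(), P0.copy()
    filtered = []
    for yt in y:
        # Prediction step
        a = T @ a
        P = T @ P @ T.T + Q
        # Update step
        F = Z @ P @ Z.T + H                # innovation covariance
        K = P @ Z.T @ np.linalg.inv(F)     # Kalman gain
        a = a + K @ (yt - Z @ a)
        P = P - K @ Z @ P
        filtered.append(a.copy())
    return np.array(filtered)

# Hypothetical 2-dimensional latent state (e.g., log capital and log technology)
# observed indirectly through a single noisy output series; placeholder values only.
T = np.array([[0.95, 0.0], [0.0, 0.98]])   # state transition
Z = np.array([[0.4, 0.6]])                 # observation loading
Q = 0.01 * np.eye(2)                       # state innovation covariance
H = np.array([[0.05]])                     # measurement noise variance
a0, P0 = np.zeros(2), np.eye(2)

rng = np.random.default_rng(0)
y = [np.array([v]) for v in rng.normal(size=50)]   # stand-in data
states = kalman_filter(y, T, Z, Q, H, a0, P0)
print(states[-1])   # filtered estimate of the latent state in the last period
```

    In the paper's setting, the transition and observation matrices would instead come from the maximum-likelihood estimates of the structural model rather than being fixed by hand.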

    Testing Substitution Bias of the Solow-Residual Measure of Total Factor Productivity Using CES-Class Production Functions

    Total factor productivity (TFP) computed as a Solow residual could be subject to input-substitution bias for two reasons. First, the Cobb-Douglas (CD) production function restricts all input-substitution elasticities to one. Second, observed inputs generally differ from optimal inputs, so that inputs observed in a sample tend to move not just because of substitution effects but for other reasons as well. In this paper, we describe using the multi-step perturbation (MSP) method to compute and evaluate TFP based on any (k+1)-times differentiable production function, and we illustrate the method for a class of CES production functions. We test the possible input-substitution bias of the Solow-residual measure of TFP in capital, labor, energy, materials, and services (KLEMS) input data obtained from the Bureau of Labor Statistics for U.S. manufacturing from 1949 to 2001. We proceed in three steps: (1) we combine the MSP method with maximum likelihood estimation to determine a best 4th-order approximation of a CES-class production function; the CES class includes not only the standard CES production functions but also the so-called tiered CES (TCES) production functions, in which prespecified groups of inputs can have their own input-substitution elasticities and input-cost shares are parameterized (i) tightly as constants, (ii) moderately as smooth functions, or (iii) loosely as successive averages. (2) Based on the best estimated production function, we compute the implied best TFP evaluated at the computed optimal inputs. (3) For the data, we compute Solow-residual TFP and compare it with the best TFP. The preliminary results show that the MSP method can produce nearly double-precision accuracy, and they reject a single constant elasticity of substitution among all inputs. For these data, the Solow-residual TFP is on average 0.1% lower than the best TFP, with a 0.6% standard error, and hence is very slightly downward biased, although the sampling-error uncertainty dominates this conclusion. In further work, we shall attempt to reduce this uncertainty with further testing based on more general CES-class production functions, in which each input has its own elasticity of substitution, and we shall use more finely estimated parameters.
    Keywords: Taylor-series approximation, model selection, numerical solution, tiered CES production function
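    For reference, the Solow-residual benchmark against which the MSP-based TFP is compared is just growth accounting: output growth minus cost-share-weighted input growth. The sketch below uses invented growth rates and cost shares for the five KLEMS inputs, not the BLS data used in the paper.

```python
import numpy as np

def solow_residual_growth(dln_y, dln_x, shares):
    """Solow-residual TFP growth: output growth minus cost-share-weighted
    input growth (the standard growth-accounting identity)."""
    return dln_y - np.sum(shares * dln_x, axis=1)

# Hypothetical log-growth rates for output and the KLEMS inputs
# (capital, labor, energy, materials, services); illustrative only.
dln_y = np.array([0.030, 0.025, 0.028])
dln_x = np.array([
    [0.020, 0.010, 0.015, 0.025, 0.018],
    [0.018, 0.008, 0.012, 0.020, 0.015],
    [0.022, 0.012, 0.014, 0.024, 0.017],
])
shares = np.array([0.30, 0.25, 0.05, 0.30, 0.10])  # illustrative cost shares, sum to 1

print(solow_residual_growth(dln_y, dln_x, shares))  # TFP growth per period
```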

    Multi-Step Perturbation Solution of Nonlinear Rational Expectations Models

    This paper develops and illustrates multi-step perturbation (MSP), the multi-step generalization of the standard single-step perturbation (SSP) method. In SSP, evaluating at x an approximate solution computed around x0 can be thought of as moving from x0 to x in "one big step" along the straight-line vector x - x0. By contrast, in MSP we move from x0 to x along any chosen path (continuous, curved-line, or connected straight-line) in h steps of equal length 1/h. If at each step we apply SSP, Taylor-series theory says that the approximation error per step is O(h^(-k-1)), so that the total approximation error in moving from x0 to x in h steps is O(h^(-k)). Thus, MSP has two major advantages over SSP. First, the accuracy of both SSP and MSP declines as the approximation point, x, moves away from the initial point, x0, but only in MSP can the decline be countered by increasing h. Increasing k is much more costly than increasing h, because increasing k requires new derivations of derivatives, more computer programming, more computer storage, and more computer run time; by contrast, increasing h generally requires only more computer run time, and often only slightly more. Second, in SSP the initial point is usually a nonstochastic steady state but can sometimes also be set up in function space as the known exact solution of a close but simpler model. This "closeness" of a related, simpler, known solution can be exploited much more explicitly by MSP when moving from x0 to x. In MSP, the state space could include parameters, so that the initial point, x0, would represent the simpler model with the known solution and the final point, x, would continue to represent the model of interest. Then, as we moved from the initial x0 to the final x in h steps, the state variables and parameters would move together from their initial to their final values, and the model being solved would vary continuously from the simple model to the model of interest. Both advantages of MSP facilitate repeatedly, accurately, and quickly solving a nonlinear rational expectations (NLRE) model in an econometric analysis over a range of data values, which could differ enough from nonstochastic steady states of the model of interest to render computed SSP solutions, for a given k, inadequately accurate. In the present paper, we extend the derivation of SSP to MSP for k = 4. As we did before, we use a mixture of gradient and differential-form differentiations to derive the MSP computational equations in conventional linear-algebraic form and illustrate them with a version of the stochastic optimal one-sector growth model.
    Keywords: numerical solution of dynamic stochastic equilibrium models
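    A toy scalar analogue may help fix the idea of trading step count h against approximation order k. The sketch below applies kth-order Taylor steps to the simplest functional problem with a known answer, f' = f with f(0) = 1, so that exp serves as the exact solution; it only illustrates the error behavior and is not the paper's NLRE algorithm.

```python
import math

def single_step(x, k):
    """SSP analogue: one kth-order Taylor approximation of exp(x) centered at 0."""
    return sum(x ** j / math.factorial(j) for j in range(k + 1))

def multi_step(x, k, h):
    """MSP analogue: move from 0 to x in h equal steps, taking a kth-order
    Taylor step each time using derivatives implied by the current approximate
    value. For f' = f, every derivative equals f itself, so each step multiplies
    the running value by the truncated exponential series of the step length."""
    step = x / h
    factor = sum(step ** j / math.factorial(j) for j in range(k + 1))
    val = 1.0                                   # f(0) = 1
    for _ in range(h):
        val *= factor
    return val

x, k = 3.0, 4
exact = math.exp(x)
print(abs(single_step(x, k) - exact))           # one big 4th-order step: large error
for h in (2, 4, 8, 16):
    print(h, abs(multi_step(x, k, h) - exact))  # error shrinks roughly like h**(-k)
```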

    An Empirical Comparison of Methods for Temporal Distribution and Interpolation at the National Accounts

    This study evaluates five mathematical and five statistical methods for temporal disaggregation in an attempt to select the most suitable method(s) for routine compilation of sub-annual estimates through temporal distribution and interpolation in the national accounts at BEA. The evaluation is conducted using 60 series of annual data from the National Economic Accounts, and the final sub-annual estimates are evaluated according to specific criteria to ensure high-quality final estimates that comply with operational policy at the national accounts. The study covers the cases of temporal disaggregation when 1) both annual and sub-annual information is available; 2) only annual data are available; 3) sub-annual estimates have both temporal and contemporaneous constraints; and 4) annual data contain negative values. The estimation results show that the modified Denton proportional first-difference method outperforms the other methods, though the Casey-Trager growth-preservation model is a close competitor in certain cases. The Lagrange polynomial interpolation procedure is inferior to all the other methods.
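    As a rough illustration of the winning method, the sketch below solves a small benchmarking problem using a common textbook formulation of the modified Denton proportional first-difference objective (keep the ratio of the estimate to the indicator as smooth as possible subject to annual sums); the quarterly indicator and annual benchmarks are invented, and BEA's production routine may differ in details.

```python
import numpy as np

def denton_pfd(x, y):
    """Modified Denton proportional first-difference benchmarking:
    choose quarterly values z whose annual sums match the benchmarks y
    while keeping the ratio r = z / x as smooth as possible."""
    T, n = len(x), len(y)
    assert T == 4 * n
    X = np.diag(x)
    # First-difference matrix acting on the ratio series r
    D = np.eye(T - 1, T, k=1) - np.eye(T - 1, T)
    # Annual aggregation matrix: each row sums one year's four quarters
    A = np.kron(np.eye(n), np.ones((1, 4)))
    # KKT system for: minimize ||D r||^2  subject to  A X r = y
    AX = A @ X
    kkt = np.block([[2 * D.T @ D, AX.T],
                    [AX, np.zeros((n, n))]])
    rhs = np.concatenate([np.zeros(T), y])
    r = np.linalg.solve(kkt, rhs)[:T]
    return x * r

# Hypothetical quarterly indicator and annual benchmarks (illustrative numbers)
x = np.array([98.0, 100.0, 102.0, 104.0, 103.0, 105.0, 108.0, 110.0])
y = np.array([410.0, 432.0])
z = denton_pfd(x, y)
print(z, z[:4].sum(), z[4:].sum())   # quarterly estimates; annual sums hit the benchmarks
```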

    Estimated U.S. Manufacturing Production Capital and Technology Based on an Estimated Dynamic Structural Economic Model

    Production capital and total factor productivity or technology are fundamental to understanding output and productivity growth, but are unobserved except at disaggregated levels and must be estimated before being used in empirical analysis. In this paper, we develop estimates of production capital and technology for U.S. total manufacturing based on an estimated dynamic structural economic model. First, using annual U.S. total manufacturing data for 1947-1997, we estimate by maximum likelihood a dynamic structural economic model of a representative production firm. In the estimation, capital and technology are completely unobserved or latent variables. Then, we apply the Kalman filter to the estimated model and the data to compute estimates of model-based capital and technology for the sample. Finally, we describe and evaluate similarities and differences between the model-based and standard estimates of capital and technology reported by the Bureau of Labor Statistics.
    Keywords: Kalman filter estimation of latent variables
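    The maximum-likelihood step described above is typically carried out by evaluating the Gaussian likelihood through the Kalman filter's prediction-error decomposition. The sketch below shows that evaluation for a generic linear state-space model; the structural parameterization that would actually be optimized is not reproduced here, and the numbers are arbitrary placeholders.

```python
import numpy as np

def kalman_loglik(y, T, Z, Q, H, a0, P0):
    """Gaussian log-likelihood of a linear state-space model, accumulated
    from the Kalman filter's one-step-ahead prediction errors."""
    a, P, loglik = a0.copy(), P0.copy(), 0.0
    for yt in y:
        a, P = T @ a, T @ P @ T.T + Q              # predict the state
        v = yt - Z @ a                             # prediction error (innovation)
        F = Z @ P @ Z.T + H                        # innovation covariance
        loglik += -0.5 * (len(v) * np.log(2 * np.pi)
                          + np.log(np.linalg.det(F))
                          + v @ np.linalg.solve(F, v))
        K = P @ Z.T @ np.linalg.inv(F)             # update with the observation
        a, P = a + K @ v, P - K @ Z @ P
    return loglik

# A numerical optimizer would search over the parameters that determine
# T, Z, Q, and H to maximize this log-likelihood; placeholder values below.
rng = np.random.default_rng(0)
y = [np.array([v]) for v in rng.normal(size=50)]
print(kalman_loglik(y, np.array([[0.9]]), np.array([[1.0]]),
                    np.array([[0.1]]), np.array([[0.2]]),
                    np.zeros(1), np.eye(1)))
```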

    Estimated U.S. Manufacturing Production Capital and Technology Based on an Estimated Dynamic Economic Model

    Production capital and technology, fundamental to understanding output and productivity growth, are unobserved except at disaggregated levels and must be estimated prior to being used in empirical analysis. We develop and apply a new estimation method, based on advances in economics, statistics, and applied mathematics, which involves estimating a structural dynamic economic model of a representative production firm and using the estimated model to compute Kalman-filtered estimates of capital and technology for the sample period. We apply the method to annual data from 1947-97 for U.S. total manufacturing and compare the estimates with those reported by the Bureau of Labor Statistics.
    Keywords: Kalman filter estimation of unobserved state variables

    Multi-Step Perturbation Solution of Nonlinear Rational Expectations Models

    Recently, perturbation has received attention as a numerical method for computing an approximate solution of a nonlinear dynamic stochastic model, which we call a nonlinear rational expectations (NLRE) model. To date, perturbation methods have been described and applied as single-step perturbation (SSP). If a solution of an NLRE model is a function f(x) of a vector x, then SSP aims to compute a kth-order Taylor approximation of f(x) centered at x0. In classical SSP, where x0 is a nonstochastic steady state of the dynamical system, a kth-order approximation is accurate on the order of ||dx||^(k+1), where dx = x - x0 and ||.|| is a vector norm. Thus, for given k and computed x0, classical SSP is accurate only locally, near x0. SSP's accuracy can be improved only by increasing k, which beyond small values results in large computing costs, especially for deriving kth-order analytical derivatives of the model's equations. So far, research has not fully solved the problem in SSP of maintaining any desired accuracy while freeing x0 from the nonstochastic steady state, so that, for given k, SSP can be arbitrarily accurate for any dx. Multi-step perturbation (MSP) fully solves this problem and thus globalizes SSP. In SSP, we approximate f(x) with a single Taylor approximation centered at x0 and thus effectively move from x0 to x in one step. In MSP, we move in a straight line from x0 to x in h steps of equal length. At each step, we approximate f at the x at the end of the step with a Taylor approximation centered at the x at the beginning of the step. After h steps and Taylor approximations, we obtain an approximation of f(x) that is accurate on the order of h^(-k). Thus, although in MSP we also set x0 to a nonstochastic steady state, unlike in SSP we can achieve any desired accuracy for any x0, x, and k simply by using sufficiently many steps. Thus, we free the accuracy from dependence on k and ||dx|| and effectively globalize SSP. Whereas increasing k requires new derivations and programming, increasing h requires only passing more times through an already programmed loop, typically at only moderately more computing time. In the paper, we derive an MSP algorithm in standard linear-algebraic notation for a 4th-order approximation of a general NLRE model, and we illustrate the algorithm and its accuracy by applying it to a stochastic one-sector optimal growth model.
    Keywords: solving dynamic stochastic equilibrium models, 4th-order approximation
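    The accuracy claim follows from a simple Taylor-remainder count, restated below under the assumption that the per-step remainders accumulate additively over the h steps.

```latex
\underbrace{O\!\big(\lVert \Delta x \rVert^{\,k+1}\big)}_{\text{error of one $k$th-order step of length }1/h}
  = O\!\big(h^{-(k+1)}\big),
\qquad
\underbrace{h \cdot O\!\big(h^{-(k+1)}\big)}_{\text{accumulated over $h$ steps}}
  = O\!\big(h^{-k}\big).
```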

    A Balanced System of Industry Accounts for the U.S. and Structural Distribution of Statistical Discrepancy

    This paper describes and illustrates a generalized least squares (GLS) reconciliation method that can efficiently incorporate all available information on the initial data in reconciling a large system of disaggregated accounts and can accurately estimate the industry distribution of the statistical discrepancy. The GLS reconciliation method is applied to reconcile the 1997 GDP-by-industry accounts and the input-output accounts. The GDP-by-industry accounts measure GDP by industry using industry gross income, and the input-output accounts measure GDP by industry as the residual between gross output and intermediate inputs. The GLS method produced balanced estimates and estimated the industry distribution of the statistical discrepancy. The results show that using reliability information to reconcile different accounts produces statistically meaningful balanced estimates. The study demonstrates that reconciling a large system of disaggregated accounts is empirically feasible and computationally efficient.
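    The balancing idea can be sketched with the generic GLS adjustment formula: minimize the reliability-weighted squared adjustments to the initial estimates subject to the accounting identities, so that less reliable cells absorb more of the discrepancy. The example below is a three-industry toy, not the 1997 accounts, and the formula shown is the standard constrained-least-squares solution rather than BEA's exact implementation.

```python
import numpy as np

def gls_reconcile(z0, V, G, c):
    """GLS balancing: adjust initial estimates z0, with reliability expressed
    by the covariance matrix V, so the linear accounting constraints G z = c
    hold exactly. Solves min (z - z0)' V^{-1} (z - z0) s.t. G z = c."""
    adjustment = V @ G.T @ np.linalg.solve(G @ V @ G.T, c - G @ z0)
    return z0 + adjustment

# Hypothetical toy example: three industry value-added estimates that should
# sum to a (treated-as-exact) published GDP total of 100.
z0 = np.array([40.0, 35.0, 28.0])            # initial estimates, sum to 103
V = np.diag([4.0, 1.0, 9.0])                 # variances: larger = less reliable
G = np.ones((1, 3))                          # constraint: industries sum to the total
c = np.array([100.0])
print(gls_reconcile(z0, V, G, c))            # the least reliable industry absorbs most of the -3
```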